13 research outputs found

    Why Does Flow Director Cause Packet Reordering?

    Full text link
    Intel Ethernet Flow Director is an advanced network interface card (NIC) technology. It provides the benefits of parallel receive processing in multiprocessing environments and can automatically steer incoming network data to the same core on which its application process resides. However, our analysis and experiments show that Flow Director cannot guarantee in-order packet delivery in multiprocessing environments. Packet reordering has various negative impacts; for example, TCP performs poorly under severe packet reordering. In this paper, we use a simplified model to analyze why Flow Director can cause packet reordering. Our experiments verify our analysis.
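
    The reordering mechanism described above can be illustrated with a minimal, hypothetical simulation (not the paper's actual model): a flow's packets first land on one RX queue; when the application process migrates to another core, Flow Director re-steers later packets to a different queue, and packets still buffered on the old queue may be drained afterwards, so the application observes them out of order.

        # Minimal sketch (hypothetical, not the paper's simplified model) of how
        # re-steering a flow between RX queues can reorder packets: packets 0-4
        # land on queue 0, the flow is then steered to queue 1 for packets 5-9,
        # and queue 1 happens to be drained before queue 0 finishes.
        from collections import deque

        queues = [deque(), deque()]

        # Packets 0-4 arrive while the flow is steered to queue 0.
        for seq in range(5):
            queues[0].append(seq)

        # The application thread migrates; the flow is re-steered to queue 1.
        for seq in range(5, 10):
            queues[1].append(seq)

        # Suppose the core servicing queue 1 polls first (queue 0's core is busy).
        delivered = list(queues[1]) + list(queues[0])
        print(delivered)                        # [5, 6, 7, 8, 9, 0, 1, 2, 3, 4]
        print(delivered == sorted(delivered))   # False: out-of-order delivery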

    SDN for End-to-End Networked Science at the Exascale (SENSE)

    No full text
    The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project is building smart network services to accelerate scientific discovery in the era of "big data" driven by Exascale, cloud computing, machine learning and AI. The project's architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive "intent" based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices. The significance of these capabilities is the ability for science applications to manage the network as a first-class schedulable resource akin to instruments, compute, and storage, to enable well-defined and highly tuned complex workflows that require close coupling of resources spread across a vast geographic footprint, such as those used in science domains like high-energy physics and basic energy sciences.
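
    As a purely illustrative sketch of the intent-to-service idea, the field names, values, and helper below are assumptions for exposition, not the SENSE project's actual interface: an application states only endpoints and service requirements, and an orchestrator expands that into per-domain provisioning requests.

        # Hypothetical shape of an application "intent" and its expansion; the
        # fields and names here are illustrative assumptions, not the SENSE API.
        intent = {
            "service": "point-to-point",
            "endpoints": ["site-A.example.org", "site-B.example.org"],
            "bandwidth_gbps": 100,          # guaranteed rate requested by the application
            "start": "2024-05-01T00:00:00Z",
            "end": "2024-05-01T06:00:00Z",
        }

        def expand_intent(intent):
            """Toy stand-in for an orchestrator: turn a high-level intent into
            one provisioning request per administrative domain crossed."""
            domains = ["campus-A", "regional-net", "backbone", "campus-B"]  # assumed path
            return [
                {"domain": d,
                 "circuit": tuple(intent["endpoints"]),
                 "rate_gbps": intent["bandwidth_gbps"],
                 "window": (intent["start"], intent["end"])}
                for d in domains
            ]

        for request in expand_intent(intent):
            print(request)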

    Deploying distributed network monitoring mesh for lhc tier-1 and tier-2 sites

    No full text
    Fermilab hosts the US Tier-1 center for data storage and analysis of the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment. To satisfy operational requirements for the LHC networking model, the networking group at Fermilab, in collaboration with Internet2 and ESnet, is participating in the perfSONAR-PS project. This collaboration has created a collection of network monitoring services targeted at providing continuous network performance measurements across wide-area distributed computing environments. The perfSONAR-PS services are packaged as a bundle, and include a bootable disk capability. We have started on a deployment plan consisting of a decentralized mesh of these network monitoring services at US LHC Tier-1 and Tier-2 sites. The initial deployment will cover all Tier-1 and Tier-2 sites of US ATLAS and US CMS. This paper will outline the basic architecture of each network monitoring service. The service discovery model, interoperability, and basic protocols will be presented. The principal deployment model and available packaging options will be detailed. The current state of deployment and availability of higher-level user interfaces and analysis tools will also be demonstrated.
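
    A decentralized mesh of monitoring services implies pairwise measurements between every pair of participating sites. The minimal sketch below (hypothetical site labels and measurement names, not the project's actual mesh-configuration format) enumerates the test pairs such a full mesh requires.

        # Minimal sketch of the measurement pairs implied by a full mesh of
        # monitoring hosts. Site labels are placeholders; the real deployment
        # uses perfSONAR-PS services and its own configuration format.
        from itertools import combinations

        tier1 = ["fnal-t1"]                               # example Tier-1 label
        tier2 = ["t2-site-1", "t2-site-2", "t2-site-3"]   # placeholder Tier-2 labels

        sites = tier1 + tier2
        tests = [
            {"src": a, "dst": b, "measurements": ["throughput", "one-way-latency"]}
            for a, b in combinations(sites, 2)
        ]

        print(f"{len(sites)} sites -> {len(tests)} site pairs")
        for t in tests:
            print(t["src"], "<->", t["dst"], t["measurements"])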